
# 32B inference optimization

- **Model:** AM Thinking V1
- **Developer:** a-m-team
- **License:** Apache-2.0
- **Tags:** Large Language Model, Transformers
- **Description:** A 32-billion-parameter dense language model focused on enhanced reasoning, built on Qwen 2.5-32B-Base, with performance comparable to larger MoE models on reasoning benchmarks.
© 2025 AIbase